Section: New Results

Natural Interaction with Robotic Systems

Human activity recognition on a load-sensing floor

Participant : François Charpillet.

In the framework of a collaboration with the Lebanese University and the CRIStAL laboratory in Lille, we evaluated this year the capability of the load-sensing floor that we designed in Nancy to address fall detection and activity recognition for elderly people living alone at home.

The Inria Nancy sensing floor consists of 104 tiles (60 × 60 cm each). Each tile is equipped with a 3-axis accelerometer at its center and four force sensors (strain-gauge load cells), one at each corner.

The pressure sensors measure the load forces exerted on the floor, which can be used to determine, for example, the center of pressure of objects, robots, or human beings on the floor.
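As an illustration, the center of pressure over a tile can be estimated as the force-weighted average of the load-cell positions. A minimal sketch, not the platform's actual code; the function name and sensor readings are hypothetical:

```python
import numpy as np

def center_of_pressure(forces, positions):
    """Force-weighted mean of the load-cell positions (illustrative helper).

    forces    : array of shape (n,), vertical force at each load cell (N)
    positions : array of shape (n, 2), (x, y) position of each cell (m)
    """
    forces = np.asarray(forces, dtype=float)
    positions = np.asarray(positions, dtype=float)
    total = forces.sum()
    if total <= 0:
        raise ValueError("no load on the tile")
    # Weighted average of the cell positions, weights = measured forces
    return forces @ positions / total

# Four load cells at the corners of a 0.60 m x 0.60 m tile
corners = [(0.0, 0.0), (0.6, 0.0), (0.0, 0.6), (0.6, 0.6)]
forces = [10.0, 10.0, 10.0, 30.0]  # heavier load near the (0.6, 0.6) corner
print(center_of_pressure(forces, corners))  # biased toward (0.6, 0.6)
```

With four corner cells per tile, this reduces to a simple weighted average of the corner coordinates; tracking the center of pressure over time across tiles is what lets the floor follow a walking person.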

This year we demonstrated that we can also determine the posture or activity of the monitored person (walking, sitting, standing, falling, etc.) by combining the pressure magnitude, the pressure duration on a tile, and the 3-axis acceleration, using a relatively simple algorithm [10], [11].
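A toy rule-based sketch of how such cues might be combined; the thresholds and decision rules below are illustrative placeholders, not the actual algorithm of [10], [11]:

```python
def classify_tile_event(load_n, duration_s, accel_peak_g):
    """Classify a tile event from load (N), load duration (s), and
    the peak 3-axis acceleration magnitude (g).

    Illustrative thresholds only, not values from the published work.
    """
    if accel_peak_g > 2.0 and load_n > 400:
        return "fall"              # sharp impact combined with a large load
    if load_n > 400 and duration_s > 5.0:
        return "sitting/standing"  # sustained heavy load without an impact
    if load_n > 100 and duration_s < 1.0:
        return "walking"           # brief footstep-like load on the tile
    return "unknown"
```

The point of the sketch is the fusion idea: acceleration disambiguates a fall from someone sitting down, while load duration separates footsteps from static postures.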

Human activity recognition with a depth camera

Participants : François Charpillet, Xuan Son Nguyen.

This year, we proposed a new local descriptor for action recognition in depth images. The proposed descriptor relies on surface normals in the 4D space of depth, time, and spatial coordinates, and on higher-order partial derivatives of depth values along the spatial coordinates. In order to classify actions, we follow the traditional Bag-of-Words (BoW) approach and propose two encoding methods, termed Multi-Scale Fisher Vector (MSFV) and Temporal Sparse Coding based Fisher Vector Coding (TSCFVC), to form global representations of depth sequences. The high-dimensional action descriptors resulting from the two encoding methods are fed to a linear SVM for efficient action classification. Our proposed methods are evaluated on two public benchmark datasets, MSRAction3D and MSRGesture3D. The experimental results show the effectiveness of the proposed methods on both datasets.
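The 4D-normal construction can be sketched as follows: treating the depth video as a surface z = f(x, y, t), the unnormalized normal at each pixel is (-∂z/∂x, -∂z/∂y, -∂z/∂t, 1). A minimal numpy sketch of this step only, not the paper's exact descriptor (which also uses higher-order derivatives and the full encoding pipeline):

```python
import numpy as np

def normals_4d(depth_seq):
    """Per-pixel 4D surface normals of a depth video (illustrative sketch).

    depth_seq : array of shape (T, H, W), depth values over time
    returns   : array of shape (T, H, W, 4), unit normals
    """
    # np.gradient returns one array per axis, in axis order (t, y, x)
    dz_dt, dz_dy, dz_dx = np.gradient(depth_seq.astype(float))
    ones = np.ones_like(dz_dx)
    # Normal of the surface z = f(x, y, t): (-dz/dx, -dz/dy, -dz/dt, 1)
    n = np.stack([-dz_dx, -dz_dy, -dz_dt, ones], axis=-1)
    # The constant last component guarantees a nonzero norm
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

seq = np.random.rand(8, 32, 32)  # toy depth sequence: 8 frames of 32x32
normals = normals_4d(seq)
print(normals.shape)  # (8, 32, 32, 4)
```

In the full pipeline, such local normals would be quantized or encoded (here via MSFV/TSCFVC) into a fixed-length vector per sequence before the linear SVM.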

Human Posture Recognition

Participants : François Charpillet, Abdallah Dib, Alain Filbois, Thomas Moinel.

Human pose estimation in realistic conditions raises multiple challenges, such as foreground extraction, background updating, and occlusion by scene objects. Most existing approaches have been demonstrated only in controlled environments. In this work, we propose a framework that improves the performance of existing tracking methods to cope with these problems. To this end, we provide a robust and scalable framework composed of three main stages. In the first, a probabilistic occupancy grid, updated with a Hidden Markov Model, is used to maintain an up-to-date background model and to extract moving persons. The second stage uses component labelling to identify and track persons in the scene. The last stage uses a hierarchical particle filter to estimate the body pose of each moving person. Occlusions are handled by querying the occupancy grid to identify hidden body parts so that they can be discarded from the pose-estimation process. We provide a parallel implementation that runs on CPU and GPU at 4 frames per second. We also validate the approach on our own dataset, which consists of motion-capture data synchronized with a single RGB-D camera, recording a person performing actions in challenging situations with severe occlusions generated by scene objects. We make this dataset available online (http://www0.cs.ucl.ac.uk/staff/M.Firman/RGBDdatasets/).
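The per-cell background update in the first stage can be sketched as one forward step of a two-state HMM (cell occupied by a moving person vs. background). The transition and observation probabilities below are illustrative placeholders, not the paper's model:

```python
import numpy as np

def update_occupancy(prior_occ, observed, p_stay=0.95, p_hit=0.8):
    """One HMM forward step per occupancy-grid cell (illustrative sketch).

    prior_occ : (H, W) array, prior P(cell occupied by a moving person)
    observed  : (H, W) bool array, cell flagged as foreground this frame
    p_stay    : probability a cell keeps its state between frames
    p_hit     : probability of a correct foreground observation
    """
    # Prediction: propagate the prior through the transition model
    pred = p_stay * prior_occ + (1 - p_stay) * (1 - prior_occ)
    # Observation likelihoods under the two hidden states
    like_occ = np.where(observed, p_hit, 1 - p_hit)
    like_free = np.where(observed, 1 - p_hit, p_hit)
    # Bayes update, normalized per cell
    post = like_occ * pred
    return post / (post + like_free * (1 - pred))
```

Cells that are repeatedly observed as background converge toward low occupancy, which is what keeps the background model up to date while persistent foreground evidence marks a person.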

Evaluation of control interfaces by non-experts

Participants : Serena Ivaldi, François Charpillet.

In this work, we address the question of user preference for a robotic interface by non-experts (naive users without training in robotics) after a single evaluation of the interface on a simple task. This corresponds to situations in which non-experts face the decision of adopting a robot for episodic use (i.e., not the regular, continuous use of workers in factories): the ease of use of an interface is crucial for robot acceptance. We also probe a possible relation between user performance and individual factors. After a focus-group study, we chose to compare the robotic arm's joystick and a graphical user interface. We then studied user performance and the subjective evaluation of the interfaces during an experiment with the Jaco robot arm and 40 healthy adults. Our results show that user preference for a particular interface does not seem to depend on performance in using it: for example, many users expressed a preference for the joystick while performing better with the graphical interface. Contrary to our expectations, this result does not seem to relate to the individual factors we evaluated, namely desire for control and negative attitude towards robots.

The preliminary results of this work are published in [23]. A journal paper with the complete results is in preparation. The work was conducted with the master's students Sebastian Marichal and Adrien Malaisé.

Robot acceptance and trust

Participant : Serena Ivaldi.

We continued our collaboration with psychologists.

Individual factors and social/physical signals

Participant : Serena Ivaldi.

We finalized our study of the influence of individual factors on the production of social signals during human-humanoid interaction in a collaborative assembly task. We found that the more extroverted people are, the more often and the longer they tend to talk with the robot, and the more negative their attitude towards robots, the less they look at the robot's face and the more they look at the robot's hands, where the assembly and the contacts occur. Our results confirm and provide evidence that the engagement models classically used in human-robot interaction should take attitudes and personality traits into account. The results are published in [15].

We started to study the influence of individual factors on physical signals and collaborative movement. We made interesting observations, for example on the influence of age and negative attitude towards robots on the amount of exchanged forces. Part of the analysis was performed by the master's student Anthony Voilqué. A paper describing our findings is in preparation.

Learning gait models with cheap sensors for applications in EHPADs

Participants : Serena Ivaldi, François Charpillet, Olivier Rochel.

Thanks to the MITACS-Inria grant, we started a collaboration with Prof. Dana Kulic at the University of Waterloo on the topic of learning gait models with cheap sensors. Jamie Waugh, a master's student, visited us for three months to start a data-collection protocol in which several sensors are used to monitor human gait under different conditions. The aim is to learn gait parameters with different sensors, such as IMUs and Kinect cameras, and to provide a quantitative comparison of the accuracy of the estimates obtained from the different sensors. As ground truth, the Qualisys motion-capture system and the GAITRite walking mat are used. The final goal of the project is to deliver algorithms for estimating gait with cheap sensors that could be used on a daily basis in healthcare facilities such as EHPADs.
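As an illustration of the kind of gait parameter involved, stride times can be roughly estimated from peaks in a single IMU's vertical acceleration. A toy sketch with a placeholder detection threshold, not the project's actual pipeline:

```python
import numpy as np

def stride_times(accel, fs, min_peak=1.2):
    """Estimate times between acceleration peaks (illustrative sketch).

    accel    : 1-D array of vertical acceleration (g)
    fs       : sampling rate (Hz)
    min_peak : detection threshold (placeholder value)
    """
    # A sample is a peak if it exceeds the threshold and both neighbours
    above = accel[1:-1] > min_peak
    local_max = (accel[1:-1] > accel[:-2]) & (accel[1:-1] >= accel[2:])
    peaks = np.flatnonzero(above & local_max) + 1
    # Intervals between consecutive peaks, converted to seconds
    return np.diff(peaks) / fs

fs = 100.0
t = np.arange(0, 5, 1 / fs)
accel = 1.0 + 0.5 * np.sin(2 * np.pi * 1.0 * t)  # synthetic 1 Hz "steps"
print(stride_times(accel, fs))  # ~1.0 s between peaks
```

Running the same estimator on IMU, Kinect, and ground-truth (Qualisys/GAITRite) data and comparing the resulting interval errors is the kind of quantitative comparison the project aims at.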